
[GLUTEN-11550][VL][UT] Enable Variant test suites#11726

Open
baibaichen wants to merge 1 commit into apache:main from
baibaichen:fix/GlutenVariantEndToEndSuite-GlutenVariantShreddingSuite

Conversation


@baibaichen baibaichen commented Mar 9, 2026

What changes are proposed in this pull request?

Enable GlutenVariantEndToEndSuite, GlutenVariantShreddingSuite, and GlutenParquetVariantShreddingSuite for both spark40 and spark41.

Four fixes:

  1. VeloxValidatorApi.scala: Detect variant shredded structs (produced by Spark's PushVariantIntoScan) by checking for __VARIANT_METADATA_KEY metadata on struct fields. Triggers fallback to Spark's native Parquet reader since Velox cannot read variant shredding encoding.

  2. VeloxSparkPlanExecApi.scala + ExpressionRestrictions.scala: Reject to_json with non-struct/map/array child types (e.g. VariantType), falling back to Spark since Velox does not support VariantType in to_json.

  3. Spark41Shims.scala + ParquetMetadataUtils.scala + VeloxBackend.scala: Detect Parquet variant logical type annotations and fall back to vanilla Spark when PARQUET_IGNORE_VARIANT_ANNOTATION is not set, since Velox native reader does not check variant annotations.

  4. pom.xml: Add -Dfile.encoding=UTF-8 to the test JVM args. On JDK 17 and earlier, Charset.defaultCharset() is derived from the OS locale, so on CI containers (centos-8/9) where LANG=C the default charset is US-ASCII; JDK 18+ (via JEP 400) always defaults to UTF-8 regardless of locale. Spark's VariantUtil.getString() calls new String(byte[], offset, length) without specifying a charset, so it decodes with the JVM default, producing garbled output for multi-byte characters (e.g. Chinese) on JDK 17 with LANG=C.
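Of the fixes above, the first hinges on a recursive walk over the scan schema looking for the shredding marker. A minimal sketch of that idea, using hypothetical stand-ins for Spark's StructType/StructField (the real code in VeloxValidatorApi.scala works on Spark's types; only the key name is taken from this PR):

```java
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for Spark's StructField/StructType, used only
// to illustrate the validation walk.
record Field(String name, Object dataType, Map<String, String> metadata) {}
record Struct(List<Field> fields) {}

public class VariantShreddingCheck {
    // Marker written by Spark's PushVariantIntoScan rewrite (name per
    // the PR description).
    static final String VARIANT_METADATA_KEY = "__VARIANT_METADATA_KEY";

    // Returns true if any struct field in the schema tree carries the
    // variant-shredding marker; the validator would then fall back to
    // Spark's native Parquet reader instead of offloading to Velox.
    static boolean hasShreddedVariant(Object dataType) {
        if (dataType instanceof Struct s) {
            for (Field f : s.fields()) {
                if (f.metadata().containsKey(VARIANT_METADATA_KEY)) {
                    return true;
                }
                if (hasShreddedVariant(f.dataType())) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

The walk must recurse because PushVariantIntoScan can rewrite variant columns nested arbitrarily deep inside struct fields.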

How was this patch tested?

  • spark40: GlutenVariantEndToEndSuite 14✅, GlutenVariantShreddingSuite 8✅, GlutenParquetVariantShreddingSuite 5✅
  • spark41: GlutenVariantEndToEndSuite 14✅, GlutenVariantShreddingSuite 8✅, GlutenParquetVariantShreddingSuite 7✅
  • Verified parse_json/to_json round-trip passes under LANG=C LC_ALL=C with -Dfile.encoding=UTF-8

Was this patch authored or co-authored using generative AI tooling?

Generated-by: GitHub Copilot CLI

Related issue: #11550
Note: This PR subsumes #11723 (GlutenParquetVariantShreddingSuite).

@github-actions github-actions bot added CORE works for Gluten Core VELOX labels Mar 9, 2026
@baibaichen baibaichen force-pushed the fix/GlutenVariantEndToEndSuite-GlutenVariantShreddingSuite branch 3 times, most recently from 4a21089 to 794a1fe on March 11, 2026 05:13
@baibaichen baibaichen changed the title from [GLUTEN-11550][VL][UT] Enable GlutenVariantEndToEndSuite and GlutenVariantShreddingSuite to [GLUTEN-11550][VL][UT] Enable Variant test suites on Mar 11, 2026
@github-actions

Run Gluten Clickhouse CI on x86

Enable GlutenVariantEndToEndSuite, GlutenVariantShreddingSuite, and
GlutenParquetVariantShreddingSuite for both spark40 and spark41.

Fixes:
1. VeloxValidatorApi: Detect variant shredded structs (produced by
   Spark's PushVariantIntoScan) by checking __VARIANT_METADATA_KEY
   metadata. Triggers fallback to Spark's native Parquet reader.

2. VeloxSparkPlanExecApi: Reject to_json with non-struct/map/array
   child types (e.g. VariantType), falling back to Spark since Velox
   does not support VariantType in to_json.

3. Spark41Shims: Detect Parquet variant logical type annotations and
   fall back to vanilla Spark when PARQUET_IGNORE_VARIANT_ANNOTATION
   is not set, since Velox native reader does not check variant
   annotations.

4. pom.xml: Add -Dfile.encoding=UTF-8 to test JVM args.

   On JDK 17 and earlier, java.nio.charset.Charset.defaultCharset()
   is determined by the OS locale. On CI containers (centos-8/9)
   where LANG=C, the default charset is US-ASCII (ANSI_X3.4-1968).
   JDK 18+ changed this via JEP 400 (https://openjdk.org/jeps/400)
   to always default to UTF-8 regardless of locale.

   Spark's VariantUtil.getString() uses new String(byte[], offset,
   length) without specifying charset, which decodes using the JVM
   default charset. With JDK 17 + LANG=C, UTF-8 encoded multi-byte
   characters (e.g. Chinese) are decoded as ASCII, producing garbled
   output.

   Call chain:
     VariantEndToEndSuite.check("\"你好,世界...\"")
     -> to_json(parse_json(col("v")))
     -> StructsToJsonEvaluator.evaluate()
     -> JacksonGenerator.write(VariantVal)
     -> VariantVal.toJson()
     -> Variant.toJsonImpl()
     -> VariantUtil.getString(byte[], pos)
     -> new String(value, start, length)  // no charset specified
   https://github.com/apache/spark/blob/v4.0.1/common/variant/src/main/java/org/apache/spark/types/variant/VariantUtil.java#L508
   https://github.com/apache/spark/blob/v4.1.0/common/variant/src/main/java/org/apache/spark/types/variant/VariantUtil.java#L509
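
    The decode at the end of this chain can be reproduced in isolation. A
    small self-contained demo (class and variable names are illustrative)
    of why the unspecified charset matters:

    ```java
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class CharsetPitfall {
        public static void main(String[] args) {
            // UTF-8 bytes of a multi-byte string, as a variant binary stores them
            byte[] utf8 = "你好".getBytes(StandardCharsets.UTF_8);

            // What the uncharsetted constructor does: decode with the JVM
            // default charset (locale-dependent before JEP 400 / JDK 18)
            String implicit = new String(utf8, 0, utf8.length);

            // Explicit charset: correct regardless of locale or JDK version
            String explicit = new String(utf8, 0, utf8.length, StandardCharsets.UTF_8);

            System.out.println("default charset: " + Charset.defaultCharset());
            // Under JDK 17 + LANG=C (US-ASCII default) the two differ and
            // `implicit` is garbled; with -Dfile.encoding=UTF-8 they agree.
            System.out.println("implicit == explicit: " + implicit.equals(explicit));
        }
    }
    ```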

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@baibaichen baibaichen force-pushed the fix/GlutenVariantEndToEndSuite-GlutenVariantShreddingSuite branch from 794a1fe to c6759c0 on March 11, 2026 09:53
@github-actions

Run Gluten Clickhouse CI on x86
